
    Activity Identification and Local Linear Convergence of Douglas--Rachford/ADMM under Partial Smoothness

    Convex optimization has become ubiquitous in most quantitative disciplines of science, including variational image processing. Proximal splitting algorithms are becoming popular for solving such structured convex optimization problems. Within this class of algorithms, Douglas--Rachford (DR) and the alternating direction method of multipliers (ADMM) are designed to minimize the sum of two proper lower semi-continuous convex functions whose proximity operators are easy to compute. The goal of this work is to understand the local convergence behaviour of DR (resp. ADMM) when the involved functions (resp. their Legendre-Fenchel conjugates) are moreover partly smooth. More precisely, when both functions (resp. their conjugates) are partly smooth relative to their respective manifolds, we show that DR (resp. ADMM) identifies these manifolds in finite time. Moreover, when these manifolds are affine or linear, we prove that DR/ADMM is locally linearly convergent. When the two functions are locally polyhedral, we show that the optimal convergence radius is given in terms of the cosine of the Friedrichs angle between the tangent spaces of the identified manifolds. This is illustrated by several concrete examples and supported by numerical experiments. Comment: 17 pages, 1 figure, published in the proceedings of the Fifth International Conference on Scale Space and Variational Methods in Computer Vision.
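    As a concrete illustration of this setting (not an example from the paper), the NumPy sketch below runs the standard Douglas--Rachford iteration on an ℓ^1-regularized least-squares problem, where both proximity operators are inexpensive; the data A, b and the parameters lam, gamma are arbitrary illustrative choices.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((20, 50))      # illustrative problem data (assumption)
        b = rng.standard_normal(20)
        lam, gamma = 0.1, 1.0                  # illustrative regularization weight and step size

        def prox_l1(z, t):
            # proximity operator of t*||.||_1 (soft-thresholding)
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def prox_ls(z, t):
            # proximity operator of (t/2)*||A x - b||^2: solve (I + t A^T A) x = z + t A^T b
            n = A.shape[1]
            return np.linalg.solve(np.eye(n) + t * A.T @ A, z + t * A.T @ b)

        z = np.zeros(A.shape[1])
        for _ in range(500):
            x = prox_l1(z, gamma * lam)        # first prox step
            y = prox_ls(2 * x - z, gamma)      # prox of the reflected point
            z = z + y - x                      # Douglas--Rachford update
        # x approximates a minimizer of lam*||x||_1 + (1/2)*||A x - b||^2

    In this polyhedral example, the support of x typically stabilizes after finitely many iterations; that finite identification of the active manifold is the behaviour the paper analyzes.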

    From error bounds to the complexity of first-order descent methods for convex functions

    This paper shows that error bounds can be used as effective tools for deriving complexity results for first-order descent methods in convex minimization. In a first stage, this objective led us to revisit the interplay between error bounds and the Kurdyka-Łojasiewicz (KL) inequality. One can show the equivalence between the two concepts for convex functions having a moderately flat profile near the set of minimizers (such as functions with Hölderian growth). A counterexample shows that the equivalence is no longer true for extremely flat functions. This fact reveals the relevance of an approach based on the KL inequality. In a second stage, we show how KL inequalities can in turn be employed to compute new complexity bounds for a wealth of descent methods for convex problems. Our approach is completely original and makes use of a one-dimensional worst-case proximal sequence in the spirit of the famous majorant method of Kantorovich. Our result applies to a very simple abstract scheme that covers a wide class of descent methods. As a byproduct of our study, we also provide new results for the globalization of KL inequalities in the convex framework. Our main results inaugurate a simple methodology: derive an error bound, compute the desingularizing function whenever possible, identify essential constants in the descent method, and finally compute the complexity using the one-dimensional worst-case proximal sequence. Our method is illustrated through projection methods for feasibility problems, and through the famous iterative shrinkage-thresholding algorithm (ISTA), for which we show that the complexity bound is of the form O(q^k), where the constituents of the bound only depend on error bound constants obtained for an arbitrary least squares objective with ℓ^1 regularization.
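    For concreteness, here is a minimal NumPy sketch of ISTA applied to an ℓ^1-regularized least-squares objective, the setting in which the abstract reports an O(q^k) bound; the data A, b, the weight lam, and the step size 1/L with L = ||A||_2^2 are standard illustrative choices, not constants from the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.standard_normal((30, 80))       # illustrative data (assumption)
        b = rng.standard_normal(30)
        lam = 0.05                              # illustrative regularization weight
        L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient of (1/2)*||A x - b||^2

        def soft_threshold(z, t):
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        x = np.zeros(A.shape[1])
        for _ in range(1000):
            grad = A.T @ (A @ x - b)                      # gradient of the smooth part
            x = soft_threshold(x - grad / L, lam / L)     # forward-backward (ISTA) step
        # x approximates a minimizer of (1/2)*||A x - b||^2 + lam*||x||_1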

    Convergence acceleration for multiobjective sparse reconstruction via knowledge transfer

    Multiobjective sparse reconstruction (MOSR) methods can potentially obtain superior reconstruction performance. However, they suffer from high computational cost, especially in high-dimensional reconstruction. Furthermore, they are generally implemented independently without reusing prior knowledge from past experiences, leading to unnecessary computational consumption due to the re-exploration of similar search spaces. To address these problems, we propose a sparse-constraint knowledge transfer operator to accelerate the convergence of MOSR solvers by reusing knowledge from past problem-solving experiences. Firstly, we introduce a deep nonlinear feature coding method to extract the feature mapping between the search space of the current problem and that of a previously solved MOSR problem. Through this mapping, we learn a set of knowledge-induced solutions which contain the search experience of the past problem. Thereafter, we develop and apply a sparse-constraint strategy to refine these learned solutions to guarantee their sparse characteristics. Finally, we inject the refined solutions into the iterations of the current problem to facilitate convergence. To validate the efficiency of the proposed operator, comprehensive studies on extensive simulated signal reconstruction are conducted.
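    The abstract does not spell out the sparse-constraint refinement, so the sketch below shows one plausible instantiation only: transferred (knowledge-induced) solutions are hard-thresholded to a target sparsity level k before being injected into the current population. The function names and the parameter k are hypothetical, not the authors' operator.

        import numpy as np

        def sparse_refine(solution, k):
            # Hypothetical refinement: keep the k largest-magnitude entries of a
            # transferred solution and zero the rest, so the injected solution
            # respects the sparsity constraint of the reconstruction problem.
            refined = np.zeros_like(solution)
            idx = np.argsort(np.abs(solution))[-k:]
            refined[idx] = solution[idx]
            return refined

        def inject(population, transferred, k):
            # Refine each knowledge-induced solution and append it to the current
            # population of the MOSR solver (replacement strategy left to the solver).
            refined = [sparse_refine(s, k) for s in transferred]
            return np.vstack([population, np.asarray(refined)])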

    Stochastic Bundle Adjustment for Efficient and Scalable 3D Reconstruction

    Current bundle adjustment solvers such as the Levenberg-Marquardt (LM) algorithm are limited by the bottleneck of solving the Reduced Camera System (RCS), whose dimension is proportional to the number of cameras. When the problem is scaled up, this step is neither computationally efficient nor manageable on a single compute node. In this work, we propose a stochastic bundle adjustment algorithm which seeks to decompose the RCS approximately inside the LM iterations to improve efficiency and scalability. It first reformulates the quadratic programming problem of an LM iteration based on a clustering of the visibility graph by introducing equality constraints across clusters. Then, we propose to relax it into a chance-constrained problem and solve it through a sampled convex program. The relaxation is intended to eliminate the interdependence between clusters embodied by the constraints, so that a large RCS can be decomposed into independent linear sub-problems. Numerical experiments on unordered Internet image sets and sequential SLAM image sets, as well as distributed experiments on large-scale datasets, demonstrate the high efficiency and scalability of the proposed approach. Code is released at https://github.com/zlthinker/STBA. Comment: Accepted by ECCV 2020.
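    To make the bottleneck concrete: in a standard LM iteration for bundle adjustment, the point variables are eliminated from the normal equations via a Schur complement, and what remains is the Reduced Camera System, whose size grows with the number of cameras. The dense NumPy sketch below illustrates that elimination on generic blocks; it is the textbook construction, not code from the linked repository.

        import numpy as np

        def reduced_camera_system(U, W, V, b_cam, b_pt):
            # Normal equations [[U, W], [W^T, V]] [dc; dp] = [b_cam; b_pt], with
            # U the camera block, V the (block-diagonal) point block and W the
            # camera-point coupling. Eliminating the points yields the RCS S dc = r.
            V_inv = np.linalg.inv(V)              # cheap in practice since V is block-diagonal
            S = U - W @ V_inv @ W.T               # Schur complement: the RCS matrix
            r = b_cam - W @ V_inv @ b_pt          # reduced right-hand side
            dc = np.linalg.solve(S, r)            # camera update (the expensive step at scale)
            dp = V_inv @ (b_pt - W.T @ dc)        # back-substitution for the point update
            return dc, dp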